10 research outputs found

    Situation inference and context recognition for intelligent mobile sensing applications

    The use of smart devices is an integral part of our daily lives. With the richness of data streaming from the sensors embedded in these devices, the applications of ubiquitous computing for future intelligent systems are virtually limitless. Situation inference, however, is a non-trivial problem in ubiquitous computing research, owing to the challenges of mobile sensing in unrestricted environments. Robust and intelligent situation inference over data streamed from mobile sensors offers many advantages: for instance, it enables a deeper understanding of human behaviour in particular situations, which can in turn be used to recommend resources or actions for cognitive augmentation, such as improved productivity and better decision making.

    In a pervasive sensing environment (e.g., a smart home), sensor data can be streamed continuously from heterogeneous sources at different frequencies, and it is difficult and time-consuming to build a model capable of recognising multiple activities, which may be performed simultaneously and at different granularities. We investigate the separability of multiple activities in time-series data and develop OPTWIN, a technique that determines the optimal time window size for the segmentation process, thereby reducing the need for sensitivity analysis, an inherently time-consuming task. OPTWIN leverages multi-objective optimisation, minimising impurity (the number of windows in which human activity labels overlap on one label space over the time series) while maximising class separability.

    The next issue is to model and recognise multiple activities effectively based on the user's contexts; an intelligent system should therefore address multi-activity and context recognition before performing situation inference in mobile sensing applications. Because the performance of simultaneous recognition of human activities and contexts is easily affected by the choice of modelling approach, we investigate the associations between these activities and contexts at multiple levels of the mobile sensing perspective to reveal the dependency property of the multi-context recognition problem. We design a Mobile Context Recognition System that incorporates a Context-based Activity Recognition (CBAR) modelling approach to produce effective outcomes from both multi-stage and multi-target inference, recognising human activities and their contexts simultaneously. In our empirical evaluation on real-world datasets, the CBAR modelling approach significantly improved the overall accuracy of simultaneously inferring mobile users' transportation mode and activity.

    The accuracy of activity and context recognition is also influenced by how reliable user annotations are; such annotations are essential and are usually acquired during data capture in the wild. We investigate how to reduce user burden during mobile sensor data collection through experience sampling of these annotations in the wild. To this end, we design CoAct-nnotate, a technique that improves the sampling of human activities and contexts by providing accurate annotation prediction and facilitating interactive user feedback acquisition for ubiquitous sensing.
    CoAct-nnotate incorporates a novel multi-view multi-instance learning mechanism to produce more accurate annotation predictions, together with a progressive learning process (model retraining based on co-training and active learning) that improves its predictive performance over time.

    Moving beyond context recognition of mobile users, human activities can be related to the essential tasks that users perform in daily life. However, the boundaries between types of tasks are inherently difficult to establish, as individuals may define them differently. We therefore investigate the implications of contextual signals for user tasks in mobile sensing applications. To define task boundaries and hence recognise tasks, we incorporate this situation inference process (i.e., task recognition) into the proposed Intelligent Task Recognition (ITR) framework, which learns users' Cyber-Physical-Social activities from their mobile sensing data. By accurately recognising the task a user is engaged in at a given time, an intelligent system can offer proactive support that helps the user progress and complete that task.

    Finally, for robust and effective learning over mobile sensing data from heterogeneous sources (e.g., Internet-of-Things devices in a mobile crowdsensing scenario), we investigate the utility of sensor data in provisioning its storage and design QDaS, an application-agnostic framework for quality-driven data summarisation. QDaS summarises data effectively by performing density-based clustering on multivariate time-series data from a selected source (i.e., data provider), where source selection is determined by a measure of data quality. The framework allows intelligent systems to retain comparable predictive results by learning from compact representations of mobile sensing data, while achieving a higher space-saving ratio.

    This thesis contributes novel techniques for mobile situation inference and context recognition, particularly in the domain of ubiquitous computing and intelligent assistive technologies. The research implements and extends machine learning techniques to solve real-world problems in multi-context recognition, mobile data summarisation, and situation inference from mobile sensing. We believe these contributions will help future studies move forward in building more intelligent systems and applications.
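    To make the OPTWIN contribution above concrete, here is a minimal sketch of the idea under simplifying assumptions: impurity is taken as the fraction of fixed-size windows whose samples span more than one activity label, and class separability as a Fisher-style scatter ratio over the pure windows. The measures, candidate sizes, and weighting are illustrative, not the thesis's exact formulation.

```python
# Illustrative OPTWIN-style search for an optimal segmentation window size.
# Both objectives are simplified stand-ins (see lead-in above).
import numpy as np

def impurity(y, w):
    """Fraction of non-overlapping windows spanning more than one label."""
    windows = [y[i:i + w] for i in range(0, len(y) - w + 1, w)]
    return float(np.mean([len(set(win)) > 1 for win in windows]))

def separability(X, y, w):
    """Fisher-style ratio of between- to within-class scatter of window means."""
    feats, labels = [], []
    for i in range(0, len(y) - w + 1, w):
        win = y[i:i + w]
        if len(set(win)) == 1:                     # keep pure windows only
            feats.append(X[i:i + w].mean(axis=0))
            labels.append(win[0])
    if len(feats) < 2:
        return 0.0
    feats, labels = np.asarray(feats), np.asarray(labels)
    overall = feats.mean(axis=0)
    between = within = 0.0
    for c in np.unique(labels):
        Xc = feats[labels == c]
        between += len(Xc) * np.sum((Xc.mean(axis=0) - overall) ** 2)
        within += np.sum((Xc - Xc.mean(axis=0)) ** 2)
    return between / (within + 1e-9)

def optwin(X, y, candidates=(16, 32, 64, 128), alpha=0.5):
    """Pick the window size trading off low impurity and high separability.

    The two objectives live on different scales; a real implementation
    would normalise them before weighting.
    """
    score = {w: alpha * impurity(y, w) - (1 - alpha) * separability(X, y, w)
             for w in candidates}
    return min(score, key=score.get)
```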

    Predicting the city foot traffic with pedestrian sensor data

    In this paper, we focus on developing a model and system for predicting city foot traffic. We utilise historical records of pedestrian counts captured by thermal and laser-based sensors installed at multiple locations throughout the city. We propose a robust prediction system that copes with varied temporal foot-traffic patterns. Our empirical evaluation shows that the proposed ARIMA model is effective in modelling both weekday and weekend patterns, outperforming other state-of-the-art models for short-term prediction of pedestrian counts. The model accurately predicts pedestrian numbers up to 16 days in advance, across multiple look-ahead times. Our system is evaluated on a real-world sensor dataset supplied by the City of Melbourne.
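    A hedged sketch of this forecasting setup follows, assuming hourly counts for a single sensor location. The seasonal ARIMA order and the file name are illustrative placeholders, not the paper's actual model selection.

```python
# Sketch: seasonal ARIMA over hourly pedestrian counts for one location.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

counts = pd.read_csv("pedestrian_counts.csv",   # hypothetical sensor export
                     index_col="timestamp",
                     parse_dates=True)["hourly_count"]

# A daily (24-hour) seasonal term; capturing distinct weekday and weekend
# patterns would additionally need a weekly term or exogenous regressors.
model = SARIMAX(counts, order=(2, 1, 2),
                seasonal_order=(1, 1, 1, 24)).fit(disp=False)
forecast = model.forecast(steps=24 * 16)        # look ahead up to 16 days
```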

    AutoJammin' - Designing progression in traffic and music

    Since the early days of automotive entertainment, music has played a crucial role in establishing pleasurable driving experiences. Future autonomous driving technologies will relieve the driver of the responsibility of driving and will allow for more interactive types of non-driving activities. However, there is a lack of research on how this liberation from the driving task will affect in-car music experiences. In this paper, we present AutoJam, an interactive music application designed to explore the potential of (semi-)autonomous driving. We describe how the AutoJam prototype capitalizes on the context of the driving situation as structural features of the interactive music system. We report on a simulator pilot study and discuss participants' driving experience with AutoJam in traffic. By proposing design implications that help to reconnect music entertainment with the driving experience of the future, we contribute to the design space for autonomous driving experiences.
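    Purely as an illustration of treating the driving situation as a structural feature of the music system, a toy mapping might look like the following; the parameters and the mapping are assumptions, not AutoJam's implementation.

```python
# Toy mapping from driving context to musical progression parameters.
def music_parameters(speed_kmh: float, traffic_density: float) -> dict:
    """Map the current driving situation onto tempo and textural intensity."""
    return {
        "tempo_bpm": 70 + 0.6 * speed_kmh,             # faster driving, faster beat
        "intensity": min(1.0, 0.3 + traffic_density),  # denser traffic, denser texture
    }
```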

    QDaS: Quality driven data summarisation for effective storage management in Internet of Things

    The proliferation of the Internet of Things (IoT) has enabled many interesting applications across several domains, including smart cities. However, the accumulation of data from smart IoT devices poses significant challenges for data storage, while services delivered to consumers must remain relevant and of high quality. In this paper, we propose QDaS, a novel domain-agnostic framework for effective data storage and management in IoT applications. The framework incorporates a novel data summarisation mechanism built on an innovative data quality estimation technique, which computes the quality of data (based on its utility) without requiring feedback from consumers of the IoT data or domain awareness of the data. We evaluate the effectiveness of the proposed QDaS framework using real-world datasets.
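    A minimal sketch in the spirit of QDaS is shown below: fixed-size windows of a multivariate stream are clustered with DBSCAN and one representative per cluster is retained, while an entropy-style proxy stands in for the paper's utility-based quality estimation. All names and parameters are illustrative assumptions.

```python
# Sketch: density-based summarisation plus a feedback-free quality proxy.
import numpy as np
from sklearn.cluster import DBSCAN

def summarise(stream, win=60, eps=0.5):
    """Keep one representative per density cluster of fixed-size windows.

    stream: array of shape (timesteps, channels).
    """
    n = (len(stream) // win) * win
    windows = stream[:n].reshape(-1, win * stream.shape[1])
    labels = DBSCAN(eps=eps).fit_predict(windows)
    reps = [windows[labels == c].mean(axis=0) for c in set(labels) if c != -1]
    return np.asarray(reps)

def quality(stream, bins=16):
    """Entropy-style utility proxy that needs no consumer feedback."""
    hist, _ = np.histogram(stream, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

# Source selection: summarise only the provider whose stream scores highest,
# e.g. best = max(provider_streams, key=quality)
```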

    Improving Experience Sampling with Multi-view User-driven Annotation Prediction

    A fundamental challenge in real-time labelling of activity data is user burden. The Experience Sampling Method (ESM) is widely used to obtain such labels for sensor data. However, in an in-situ deployment it is not feasible to expect users to precisely label the start and end time of each event or activity; for this reason, time-point-based experience sampling (without an actual start and end time) is prevalent. We present a framework that applies multi-instance and semi-supervised learning techniques to predict user annotations from multiple mobile sensor data streams. Our framework progressively estimates users' annotations in ESM-based studies via an interactive pipeline of co-training and active learning. We evaluate our work using data from an in-the-wild collection study.
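    A minimal sketch of such a pipeline follows, under simplifying assumptions: two sensor views, random-forest base learners, a fixed confidence threshold, and a small query budget. It illustrates the co-training and active-learning interplay rather than the published implementation.

```python
# One progressive round: confident pseudo-labels are shared across views
# (co-training) and the least confident samples are queried (active learning).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def progressive_round(Xa, Xb, y, labelled, conf=0.9, budget=5):
    """Mutates y and labelled in place; returns indices to ask the user about.

    Xa, Xb: per-view feature matrices; y: label array; labelled: boolean mask.
    """
    ma = RandomForestClassifier(n_estimators=100).fit(Xa[labelled], y[labelled])
    mb = RandomForestClassifier(n_estimators=100).fit(Xb[labelled], y[labelled])
    pool = np.where(~labelled)[0]
    pa = ma.predict_proba(Xa[pool])
    pb = mb.predict_proba(Xb[pool])
    # Simplified co-training: each view adds its confident predictions to the
    # shared labelled pool used in the next retraining round.
    for probs, model in ((pa, ma), (pb, mb)):
        sure = probs.max(axis=1) >= conf
        y[pool[sure]] = model.classes_[probs[sure].argmax(axis=1)]
        labelled[pool[sure]] = True
    # Active learning: route the least confident remaining samples to the user.
    mean_conf = (pa.max(axis=1) + pb.max(axis=1)) / 2
    remaining = ~labelled[pool]
    return pool[remaining][np.argsort(mean_conf[remaining])[:budget]]
```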

    ProMETheus: An Intelligent Mobile Voice Meeting Minutes System

    In this paper, we focus on designing and developing ProMETheus, an intelligent system that generates meeting minutes from audio data. The first task in ProMETheus is to recognize the speakers in noisy audio data: a speaker recognition algorithm automatically identifies who is speaking from the speech in a recording. Speech recognition then transcribes each speaker's audio to text, so that ProMETheus can generate the complete meeting text with speakers' names in chronological order. To surface the subject of the meeting and the agreed actions, we use a text summarization algorithm that extracts meaningful key phrases and summary sentences from the complete meeting text. In addition, sentiment analysis of each speaker's contributions makes the extracted agreed actions more human-aware by calculating a relevance score for each course of action from the sentiment and attitude of the text's tone. ProMETheus is capable of accurately summarizing the meeting and analyzing the agreed actions. Our robust system is evaluated on a real-world audio meeting dataset that involves multiple speakers in each meeting session.
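    The four-stage pipeline could be skeletonised as follows; each stage is a stub supplied by the caller, and all names are hypothetical rather than the system's actual API.

```python
# Skeleton of the minutes pipeline: diarisation -> transcription ->
# summarization -> sentiment scoring of agreed actions.
from dataclasses import dataclass

@dataclass
class Minutes:
    transcript: list   # (speaker, utterance) pairs in chronological order
    summary: list      # key phrases and summary sentences
    actions: list      # agreed actions with sentiment-weighted relevance

def generate_minutes(audio, diarise, transcribe, summarize, score_actions):
    segments = diarise(audio)                     # who spoke, and when
    transcript = [(seg.speaker, transcribe(seg))  # assumes a .speaker field
                  for seg in segments]
    summary = summarize(transcript)               # key phrases + sentences
    actions = score_actions(transcript, summary)  # sentiment-based relevance
    return Minutes(transcript, summary, actions)
```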

    Intelligent task recognition: Towards enabling productivity assistance in daily life

    We introduce the novel research problem of task recognition in daily life, recognizing tasks such as project management, planning, meal breaks, communication, documentation, and family care. We capture the Cyber, Physical, and Social (CPS) activities of 17 participants over four weeks using device-based sensing, app activity logging, and an experience sampling methodology. Our cohort includes students, casual workers, and professionals, forming the first real-world, context-rich task behaviour dataset. We model CPS activities across different task categories; the results highlight the importance of considering the CPS feature sets in modelling, especially for work-related tasks.
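    As a sketch of the modelling step, the CPS feature blocks can be fused by simple concatenation before classification; the data below are synthetic placeholders and the feature semantics are assumptions, not the study's actual feature set.

```python
# Sketch: fuse Cyber, Physical, and Social feature blocks per time window.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
cyber = rng.normal(size=(400, 12))      # e.g., app-category usage per window
physical = rng.normal(size=(400, 8))    # e.g., motion and location features
social = rng.normal(size=(400, 4))      # e.g., communication counts
y = rng.integers(0, 6, size=400)        # six task categories (placeholder)

X = np.hstack([cyber, physical, social])             # fused CPS feature space
scores = cross_val_score(RandomForestClassifier(n_estimators=200), X, y, cv=5)
# Comparing against single-view baselines shows the value of the CPS fusion.
```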

    Task Intelligence for Search and Recommendation
